
    Irrational behavior of algebraic discrete valuations

    We study algebraic discrete valuations dominating normal local domains of dimension two. We construct a family of examples to show that the Hilbert-Samuel function of the associated graded ring of the valuation can fail to be asymptotically of the form quasi-polynomial plus a bounded function. We also show that the associated multiplicity can be irrational, or even transcendental.
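
    To make the objects named in this abstract concrete, the following is a brief sketch using standard definitions from the literature on valuations dominating local rings; the precise normalization used by the authors may differ.

    Let $(R,\mathfrak{m}_R)$ be a normal local domain of dimension two and let $\nu$ be a discrete valuation dominating $R$. The valuation ideals and the associated graded ring are
    $$ I_n = \{ f \in R : \nu(f) \ge n \} \cup \{0\}, \qquad \mathrm{gr}_\nu(R) = \bigoplus_{n \ge 0} I_n / I_{n+1}, $$
    and the Hilbert-Samuel function in question is $\varphi(n) = \ell_R(R/I_n) = \sum_{i<n} \dim_{R/\mathfrak{m}_R} I_i/I_{i+1}$. The associated multiplicity is then the normalized limit
    $$ \mu(\nu) = \lim_{n \to \infty} \frac{2\,\varphi(n)}{n^2}, $$
    and the examples exhibit both failure of $\varphi(n)$ to be quasi-polynomial plus a bounded function and irrationality (or transcendence) of $\mu(\nu)$.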

    InteractE: Improving Convolution-based Knowledge Graph Embeddings by Increasing Feature Interactions

    Most existing knowledge graphs suffer from incompleteness, which can be alleviated by inferring missing links based on known facts. One popular way to accomplish this is to generate low-dimensional embeddings of entities and relations and use these to make inferences. ConvE, a recently proposed approach, applies convolutional filters on 2D reshapings of entity and relation embeddings in order to capture rich interactions between their components. However, the number of interactions that ConvE can capture is limited. In this paper, we analyze how increasing the number of these interactions affects link prediction performance, and use our observations to propose InteractE. InteractE is based on three key ideas: feature permutation, a novel feature reshaping, and circular convolution. Through extensive experiments, we find that InteractE outperforms state-of-the-art convolutional link prediction baselines on FB15k-237. Further, InteractE achieves an MRR score that is 9%, 7.5%, and 23% better than ConvE on the FB15k-237, WN18RR and YAGO3-10 datasets, respectively. These results validate our central hypothesis that increasing feature interactions is beneficial to link prediction performance. We make the source code of InteractE publicly available to encourage reproducible research. Comment: Accepted at AAAI 2020.
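
    As a rough illustration of the circular-convolution idea mentioned in the abstract, the sketch below interleaves entity and relation embedding components into a 2D grid and convolves it with wrap-around padding. This is not the authors' code: the grid shape, the interleaving scheme and the kernel size are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def chequer_reshape(e, r, h=16, w=20):
        """Interleave entity and relation embedding components into one 2D grid
        so that convolutional filters see components of both in every receptive field."""
        batch, d = e.shape                                    # assumes 2 * d == h * w
        mixed = torch.stack([e, r], dim=2).reshape(batch, 2 * d)
        return mixed.view(batch, 1, h, w)

    def circular_conv(x, kernel):
        """2D convolution with circular (wrap-around) padding instead of zero padding."""
        kh, kw = kernel.shape[-2:]
        x = F.pad(x, (kw // 2, kw // 2, kh // 2, kh // 2), mode="circular")
        return F.conv2d(x, kernel)

    # toy usage: batch of 4 triples, 160-dim embeddings, 32 filters of size 3x3
    e, r = torch.randn(4, 160), torch.randn(4, 160)
    kernel = torch.randn(32, 1, 3, 3)
    feats = circular_conv(chequer_reshape(e, r), kernel)      # shape (4, 32, 16, 20)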

    Segmenting Scientific Abstracts into Discourse Categories: A Deep Learning-Based Approach for Sparse Labeled Data

    The abstract of a scientific paper distills the contents of the paper into a short paragraph. In the biomedical literature, it is customary to structure an abstract into discourse categories such as BACKGROUND, OBJECTIVE, METHOD, RESULT, and CONCLUSION, but this segmentation is uncommon in other fields like computer science. Explicit categories could be helpful for more granular, that is, discourse-level search and recommendation. The sparsity of labeled data makes it challenging to construct supervised machine learning solutions for automatic discourse-level segmentation of abstracts in non-biomedical domains. In this paper, we address this problem using transfer learning. In particular, we define three discourse categories for an abstract, BACKGROUND, TECHNIQUE, and OBSERVATION, because these three categories are the most common. We train a deep neural network on structured abstracts from PubMed and then fine-tune it on a small hand-labeled corpus of computer science papers. We observe an accuracy of 75% on the test corpus. We perform an ablation study to highlight the roles of the different parts of the model. Our method appears to be a promising solution to the automatic segmentation of abstracts where labeled data is sparse. Comment: to appear in the proceedings of JCDL'2020.
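
    The transfer-learning recipe described above can be sketched roughly as follows: pretrain a sentence classifier on the large PubMed corpus of structured abstracts, then fine-tune it at a lower learning rate on the small hand-labeled computer-science corpus. The classifier head, the embedding dimension and the toy data below are assumptions for illustration, not the authors' architecture.

    import torch
    import torch.nn as nn

    LABELS = ["BACKGROUND", "TECHNIQUE", "OBSERVATION"]

    class SentenceClassifier(nn.Module):
        def __init__(self, emb_dim=768, hidden=256, n_classes=len(LABELS)):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(emb_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_classes))

        def forward(self, sent_emb):            # sent_emb: (batch, emb_dim) sentence embeddings
            return self.net(sent_emb)           # logits over the three categories

    def train(model, batches, epochs, lr):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x, y in batches:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()

    def toy_batches(n):                         # random stand-ins for embedded, labeled sentences
        xs, ys = torch.randn(n, 768), torch.randint(0, len(LABELS), (n,))
        return [(xs[i:i + 32], ys[i:i + 32]) for i in range(0, n, 32)]

    model = SentenceClassifier()
    train(model, toy_batches(2048), epochs=2, lr=1e-3)   # stand-in for PubMed pretraining
    train(model, toy_batches(256), epochs=5, lr=1e-4)    # stand-in for CS-corpus fine-tuning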

    ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations

    Graph Neural Networks (GNNs) have been shown to work effectively for modeling graph-structured data to solve tasks such as node classification, link prediction and graph classification. There has been some recent progress in defining the notion of pooling in graphs, whereby the model tries to generate a graph-level representation by downsampling and summarizing the information present in the nodes. Existing pooling methods either fail to effectively capture the graph substructure or do not easily scale to large graphs. In this work, we propose ASAP (Adaptive Structure Aware Pooling), a sparse and differentiable pooling method that addresses the limitations of previous graph pooling architectures. ASAP utilizes a novel self-attention network along with a modified GNN formulation to capture the importance of each node in a given graph. It also learns a sparse soft cluster assignment for nodes at each layer to effectively pool the subgraphs into the pooled graph. Through extensive experiments on multiple datasets and theoretical analysis, we motivate our choice of the components used in ASAP. Our experimental results show that combining existing GNN architectures with ASAP leads to state-of-the-art results on multiple graph classification benchmarks. ASAP achieves an average improvement of 4% over the current sparse hierarchical state-of-the-art method. Comment: The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020).
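
    A stripped-down sketch of the score-and-pool idea is given below (dense adjacency, single graph). It keeps only the generic structure: score nodes, retain the top fraction as cluster centres, softly assign every node to them, and pool both features and adjacency. ASAP's local self-attention, modified GNN formulation and sparsity are omitted, and the linear scoring function is a placeholder.

    import torch
    import torch.nn as nn

    class SimplePool(nn.Module):
        def __init__(self, dim, ratio=0.5):
            super().__init__()
            self.score = nn.Linear(dim, 1)   # placeholder for ASAP's attention-based fitness
            self.ratio = ratio

        def forward(self, x, adj):
            # x: (n, dim) node features; adj: (n, n) dense adjacency with self-loops
            n = x.size(0)
            fitness = torch.sigmoid(self.score(x)).squeeze(-1)   # node importance, (n,)
            k = max(1, int(self.ratio * n))
            top = torch.topk(fitness, k).indices                 # indices of retained clusters
            # soft assignment of every node to the k retained clusters
            # (ASAP additionally restricts assignments to local neighbourhoods)
            assign = torch.softmax(x @ x[top].t(), dim=-1)       # (n, k)
            x_pool = assign.t() @ (x * fitness.unsqueeze(-1))    # pooled features, (k, dim)
            adj_pool = assign.t() @ adj @ assign                 # pooled adjacency, (k, k)
            return x_pool, adj_pool

    # toy usage: 10-node graph with 16-dim node features
    x = torch.randn(10, 16)
    adj = ((torch.rand(10, 10) > 0.7).float() + torch.eye(10)).clamp(max=1)
    x2, adj2 = SimplePool(16)(x, adj)        # 5 clusters remain after pooling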

    APOLLO: A Simple Approach for Adaptive Pretraining of Language Models for Logical Reasoning

    Logical reasoning over text is an important ability that requires understanding the information present in the text and its interconnections, and then reasoning through them to infer new conclusions. Prior works on improving the logical reasoning ability of language models require complex processing of training data (e.g., aligning symbolic knowledge to text), yielding task-specific data augmentation solutions that restrict the learning of general logical reasoning skills. In this work, we propose APOLLO, an adaptively pretrained language model with improved logical reasoning abilities. We select a subset of Wikipedia, based on a set of logical inference keywords, for continued pretraining of a language model. We use two self-supervised loss functions: a modified masked language modeling loss, where only specific parts-of-speech words that would likely require more reasoning than basic language understanding are masked, and a sentence-level classification loss that teaches the model to distinguish between entailment and contradiction types of sentences. The proposed training paradigm is both simple and independent of task formats. We demonstrate the effectiveness of APOLLO by comparing it with prior baselines on two logical reasoning datasets: APOLLO performs comparably on ReClor and outperforms the baselines on LogiQA. The code base has been made publicly available. Comment: Accepted at ACL 2023, code available at https://github.com/INK-USC/APOLL
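
    The part-of-speech-restricted masking can be illustrated with a short sketch (not the authors' code): only words whose POS tag falls in a chosen set are candidates for masking. Which tags count as reasoning-heavy, and the masking rate, are assumptions here and may differ from the paper's choices.

    import random

    MASKABLE_TAGS = {"VERB", "ADJ", "ADV"}   # assumed set of "reasoning-heavy" parts of speech

    def selective_mask(tagged_tokens, mask_token="[MASK]", rate=0.15):
        """tagged_tokens: list of (word, pos_tag) pairs produced by any POS tagger."""
        out = []
        for word, tag in tagged_tokens:
            if tag in MASKABLE_TAGS and random.random() < rate:
                out.append(mask_token)
            else:
                out.append(word)
        return " ".join(out)

    # toy usage with a hand-tagged sentence; rate=1.0 masks every eligible word
    sent = [("The", "DET"), ("result", "NOUN"), ("therefore", "ADV"),
            ("implies", "VERB"), ("the", "DET"), ("claim", "NOUN")]
    print(selective_mask(sent, rate=1.0))    # masks "therefore" and "implies"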

    A Feature Weighting Technique on SVM for Human Action Recognition

    Human action recognition is a challenging research topic that has attracted considerable attention in recent years. This paper presents a feature weighting framework for human action recognition based on the movement of different body parts. Intuitively, understanding the motion of the body part that contributes most to a specific action gives a better representation of that activity: for actions like walking, running and jogging, leg movement is more important, while in boxing, waving and clapping, hand movement is more informative. This work presents a technique that uses the recognition rates of body-part sub-regions to weight the kernel function. First, the complete human body is extracted from the background, and HOG (histogram of oriented gradients) based body-part detection is applied to generate three sub-regions of the body (head; arm and body; foot and leg). A recognition rate and a weight are calculated for each of these sub-regions (body parts) for a particular action. Based on the weight (ω) of each sub-region, a weighted-feature Gaussian kernel function is obtained and a weighted-feature support vector machine (WF-SVM) classifier is constructed. Experimental results show that the proposed framework outperforms several state-of-the-art methods on both the KTH and UCF-ARG datasets.
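
    A weighted-feature Gaussian kernel of the kind described above can be sketched as a sum of per-body-part squared distances scaled by the weights ω, plugged into an SVM as a precomputed kernel. The feature split, the weights and gamma below are illustrative assumptions, not values from the paper.

    import numpy as np
    from sklearn.svm import SVC

    def weighted_rbf_kernel(X, Y, part_slices, weights, gamma=0.1):
        """K(x, y) = exp(-gamma * sum_p w_p * ||x_p - y_p||^2), x_p = features of body part p."""
        dist = np.zeros((X.shape[0], Y.shape[0]))
        for sl, w in zip(part_slices, weights):
            diff = X[:, None, sl] - Y[None, :, sl]
            dist += w * np.sum(diff ** 2, axis=-1)
        return np.exp(-gamma * dist)

    # toy data: 100 clips, three 60-dim HOG blocks (head; arm+body; foot+leg), 6 action classes
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(100, 180)), rng.integers(0, 6, size=100)
    parts = [slice(0, 60), slice(60, 120), slice(120, 180)]
    omega = [0.2, 0.3, 0.5]                  # e.g. leg-dominated actions weight the leg block more

    K_train = weighted_rbf_kernel(X, X, parts, omega)
    clf = SVC(kernel="precomputed").fit(K_train, y)
    preds = clf.predict(weighted_rbf_kernel(X, X, parts, omega))   # kernel vs. training clips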